Parallel design and implementation of synthetic view distortion change algorithm in reconfigurable structure
JIANG Lin, SHI Jiaqi, LI Yuancheng
Journal of Computer Applications    2021, 41 (6): 1734-1740.   DOI: 10.11772/j.issn.1001-9081.2020091462
Focused on the high computational time complexity of the depth-map-based Synthesized View Distortion Change (SVDC) algorithm in 3D High Efficiency Video Coding (3D-HEVC), a parallelization method for the SVDC algorithm based on hybrid granularity was proposed for a reconfigurable array structure. Firstly, the SVDC algorithm was divided into two parts: Virtual View Synthesis (VVS) and distortion value calculation. Secondly, the VVS part was accelerated by pipelining, and the distortion calculation part was accelerated at two levels: the task level, where the synthesized image was divided by pixels, and the instruction level, where the distortion calculation inside each pixel was divided by its computation steps. Finally, a reconfigurable mechanism was used to run the VVS part and the distortion calculation part in parallel. Theoretical analysis and hardware simulation results show that the proposed method achieves a speedup of 2.11 with 4 Process Elements (PEs), and reduces the calculation time by 18.56% and 21.93% compared with SVDC implementations based on Low Level Virtual Machine (LLVM) and Open Multi-Processing (OpenMP) respectively. The proposed method can thus exploit the parallelism of the SVDC algorithm and, combined with the characteristics of the reconfigurable array structure, effectively shorten its execution time.
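The task-level division described above can be illustrated in software, although the paper targets a reconfigurable hardware array rather than CPU processes. A minimal Python sketch, assuming `synth` and `ref` are same-shape integer view images; all names are illustrative:

```python
import numpy as np
from multiprocessing import Pool

def block_distortion(args):
    # Sum of squared differences between a synthesized block and the
    # reference block: a stand-in for the per-pixel SVDC distortion kernel.
    synth_block, ref_block = args
    diff = synth_block.astype(np.int64) - ref_block.astype(np.int64)
    return int(np.sum(diff ** 2))

def parallel_distortion(synth, ref, n_pe=4):
    # Task-level split: the synthesized image is divided into row blocks,
    # one per worker, mimicking the 4-PE partition by pixels.
    # (Wrap calls in `if __name__ == "__main__":` when running as a script.)
    blocks = list(zip(np.array_split(synth, n_pe), np.array_split(ref, n_pe)))
    with Pool(n_pe) as pool:
        return sum(pool.map(block_distortion, blocks))
```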
Cross-social network user alignment algorithm based on knowledge graph embedding
TENG Lei, LI Yuan, LI Zhixing, HU Feng
Journal of Computer Applications    2019, 39 (11): 3198-3203.   DOI: 10.11772/j.issn.1001-9081.2019051143
Aiming at the poor network-embedding performance of cross-social-network user alignment algorithms and the inability of negative sampling methods to guarantee the quality of generated negative samples, a cross-social-network KGEUA (Knowledge Graph Embedding User Alignment) algorithm was proposed. In the embedding stage, known anchor user pairs were used to expand the positive samples, and a Near_K negative sampling method was proposed to generate negative examples; the two social networks were then embedded into a unified low-dimensional vector space with a knowledge graph embedding method. In the alignment stage, the existing user similarity measurement was improved: the proposed structural similarity was combined with the traditional cosine similarity to measure user similarity jointly, and an adaptive threshold-based greedy matching method was proposed to align users. Finally, the newly aligned user pairs were added to the training set to continuously optimize the vector space. The experimental results show that the proposed algorithm achieves a hits@30 value of 67.7% on the Twitter-Foursquare dataset, 3.3 to 34.8 percentage points higher than state-of-the-art algorithms, effectively improving user alignment performance.
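A minimal sketch of the joint similarity used in the alignment stage, assuming user embeddings `u`, `v` and sets of already-aligned neighbors; the weight `alpha` and the Jaccard-style structural term are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def combined_similarity(u, v, nbrs_u, nbrs_v, alpha=0.5):
    # Cosine similarity of the two user embeddings...
    cos = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    # ...combined with a structural term: overlap of aligned neighbors.
    structural = len(nbrs_u & nbrs_v) / max(len(nbrs_u | nbrs_v), 1)
    return alpha * cos + (1 - alpha) * structural
```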
Fault detection strategy based on local neighbor standardization and dynamic principal component analysis
ZHANG Cheng, GUO Qingxiu, FENG Liwei, LI Yuan
Journal of Computer Applications    2018, 38 (9): 2730-2734.   DOI: 10.11772/j.issn.1001-9081.2018010071
Aiming at processes with dynamic and multimode characteristics, a fault detection strategy based on Local Neighbor Standardization (LNS) and Dynamic Principal Component Analysis (DPCA) was proposed. First, the k nearest neighbors of each sample in the training data set were found, and the mean and standard deviation of each variable over this neighborhood were calculated. Next, these means and standard deviations were used to standardize the current samples. Finally, traditional DPCA was applied to the new data set to determine the control limits of the T² and SPE statistics for fault detection. LNS can eliminate the multimode characteristic of a process and make the new data set follow a multivariate Gaussian distribution, while preserving the deviation of an outlier from the normal trajectory. LNS-DPCA can therefore reduce the impact of the multimode structure and improve fault detectability in processes with dynamic properties. The proposed strategy was evaluated on a simulated case and the penicillin fermentation process. The experimental results indicate that the proposed method outperforms Principal Component Analysis (PCA), DPCA and Fault Detection based on k Nearest Neighbors (FD-kNN).
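A minimal numpy sketch of the LNS step described above, assuming Euclidean neighbors; `k` and all names are illustrative choices, not taken from the paper:

```python
import numpy as np

def local_neighbor_standardize(train, X, k=10):
    # Standardize each sample by the mean and standard deviation of its
    # k nearest neighbors in the training set.
    X_std = np.empty_like(X, dtype=float)
    for i, x in enumerate(X):
        d = np.linalg.norm(train - x, axis=1)    # distances to training data
        idx = np.argsort(d)[:k]                  # k nearest neighbors
        mu = train[idx].mean(axis=0)
        sigma = train[idx].std(axis=0) + 1e-12   # avoid division by zero
        X_std[i] = (x - mu) / sigma
    return X_std
```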
Batch process monitoring based on k nearest neighbors in discriminated kernel principal component space
ZHANG Cheng, GUO Qingxiu, LI Yuan
Journal of Computer Applications    2018, 38 (8): 2185-2191.   DOI: 10.11772/j.issn.1001-9081.2018020345
Aiming at the nonlinear and multimode features of batch processes, a fault detection method based on the k Nearest Neighbors (kNN) rule in the discriminated kernel principal component space, namely Dis-kPC kNN, was proposed. Firstly, in kernel Principal Component Analysis (kPCA), the kernel window width parameter was selected between the within-class width and the between-class width according to the category labels, so that the kernel matrix could effectively extract data correlation features while keeping accurate category information. Then the kNN rule was used to replace the conventional T² statistic in the kernel principal component space, which can handle fault detection in processes with nonlinear and multimode features. Finally, the proposed method was validated on a numerical simulation and the semiconductor etching process. The experimental results show that the kNN rule in the discriminated kernel principal component space can effectively deal with nonlinear and multimode conditions, improve computational efficiency and reduce memory consumption; in addition, its fault detection rate is significantly better than that of the comparison methods.
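The kNN rule that replaces the T² statistic can be sketched as follows, assuming `Z_train` holds the kernel principal component scores of normal training batches; `k` is an assumed tuning parameter:

```python
import numpy as np

def knn_statistic(Z_train, z, k=5):
    # Monitoring statistic: sum of squared distances from the score vector z
    # to its k nearest training scores; a fault is flagged when it exceeds
    # a control limit estimated from normal data.
    d2 = np.sum((Z_train - z) ** 2, axis=1)
    return float(np.sum(np.sort(d2)[:k]))
```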
Fault detection for multistage process based on improved local neighborhood standardization and kNN
FENG Liwei, ZHANG Cheng, LI Yuan, XIE Yanhong
Journal of Computer Applications    2018, 38 (7): 2130-2135.   DOI: 10.11772/j.issn.1001-9081.2017112701
Concerning the multi-center and varying-structure characteristics of multistage process data, a fault detection method based on Improved Local Neighborhood Standardization and k Nearest Neighbors (ILNS-kNN) was proposed. Firstly, the local neighbor set containing the k nearest neighbors of each sample was found. Secondly, each sample was standardized by the mean and standard deviation of its local neighbor set to obtain the standard sample. Finally, fault detection was carried out by calculating the cumulative neighbor distance of samples in the standard sample set. Improved Local Neighborhood Standardization (ILNS) shifts the center of each stage's data to the origin and adjusts the dispersion of each stage to be approximately the same, so that the multistage process data are fused into single-stage data obeying a multivariate Gaussian distribution. Fault detection experiments on the penicillin fermentation process show that the ILNS-kNN method achieves a detection rate above 97% for six types of faults. The method can detect faults not only in general multistage processes but also in multistage processes with significantly different variances, better ensuring the safety of multistage processes and high product quality.
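A sketch of the detection step after ILNS, assuming standardized data; the cumulative neighbor distance is thresholded at a quantile control limit, where `k` and `q` are assumed tuning choices:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def cumulative_distance_detector(train_std, k=5, q=0.99):
    # Control limit: q-quantile of the cumulative k-nearest-neighbor
    # distance over the standardized training samples (self excluded).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(train_std)
    d_train, _ = nn.kneighbors(train_std)
    limit = np.quantile(d_train[:, 1:].sum(axis=1), q)
    def is_fault(x_std):
        d, _ = nn.kneighbors(np.atleast_2d(x_std), n_neighbors=k)
        return float(d.sum()) > limit
    return is_fault
```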
Retinal vessel segmentation algorithm based on hybrid phase feature
LI Yuanyuan, CAI Yiheng, GAO Xurong
Journal of Computer Applications    2018, 38 (7): 2083-2088.   DOI: 10.11772/j.issn.1001-9081.2017123045
Focusing on the deficiency of the phase-consistency feature in detecting vessel centers, a new retinal vessel segmentation algorithm based on a hybrid phase feature was proposed. Firstly, the original retinal image was preprocessed. Secondly, every pixel was represented by a 4-D vector composed of Hessian matrix, Gabor transform, Bar-selective Combination Of Shifted FIlter REsponses (B-COSFIRE) and phase features. Finally, a Support Vector Machine (SVM) was used for pixel classification to realize the segmentation of retinal vessels. Among the four features, the phase feature was a new hybrid phase feature formed from the phase-consistency feature and the Hessian matrix feature through wavelet fusion; it preserves the good vascular edge information of the phase-consistency feature while compensating for its deficient detection of vessel centers. The average Accuracy (Acc) of the proposed algorithm on the Digital Retinal Images for Vessel Extraction (DRIVE) database is 0.9574, and the average Area Under receiver operating characteristic Curve (AUC) is 0.9702. In single-feature pixel-classification experiments, replacing the phase-consistency feature with the hybrid phase feature improves the average Acc from 0.9191 to 0.9478 and the AUC from 0.9359 to 0.9702. The experimental results show that the hybrid phase feature is more suitable for pixel-classification-based retinal vessel segmentation than the phase-consistency feature.
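The pixel-classification step can be sketched as below, assuming the four feature responses are precomputed as images of the same shape; training on a subset of pixels (`train_idx`) is an assumption, as is common practice with SVMs:

```python
import numpy as np
from sklearn.svm import SVC

def segment_vessels(features, label_mask, train_idx):
    # Each pixel becomes a 4-D vector (Hessian, Gabor, B-COSFIRE, hybrid
    # phase responses); an RBF-SVM classifies vessel vs. background.
    X = np.stack([f.ravel() for f in features], axis=1)
    y = label_mask.ravel()
    clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
    return clf.predict(X).reshape(label_mask.shape)
```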
Local outlier factor fault detection method based on statistical pattern and local nearest neighborhood standardization
FENG Liwei, ZHANG Cheng, LI Yuan, XIE Yanhong
Journal of Computer Applications    2018, 38 (4): 965-970.   DOI: 10.11772/j.issn.1001-9081.2017092310
A Local Outlier Factor fault detection method based on Statistics Pattern and Local Nearest neighborhood Standardization (SP-LNS-LOF) was proposed to deal with unequal batch lengths, mean drift and differing batch structures in multimode process data. Firstly, the statistics pattern of each training sample was calculated; secondly, each statistics pattern was standardized into a standard sample using its set of local neighbor samples; finally, the local outlier factor of each standard sample was calculated and used as the detection index, with its quantile used as the detection control limit. When the local outlier factor of an online sample exceeded the control limit, the sample was identified as a fault; otherwise it was normal. The statistics pattern extracts the main information of the process and eliminates the impact of unequal batch lengths; the local neighborhood standardization overcomes mean shift and differing batch structures; and the local outlier factor measures the similarity of samples and separates fault samples from normal ones. Simulation experiments on the semiconductor etching process show that SP-LNS-LOF detects all 21 faults and has a higher detection rate than Principal Component Analysis (PCA), kernel PCA (kPCA), Fault Detection using the k Nearest Neighbor rule (FD-kNN) and Local Outlier Factor (LOF) methods. The theoretical analysis and simulation results show that SP-LNS-LOF is suitable for fault detection in multimode processes, with high detection efficiency, and helps ensure the safety of the production process.
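A minimal sklearn-based sketch of the detection index, assuming the statistics patterns are already LNS-standardized in `train_std`; `k` and the 0.99 quantile are assumptions:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def lof_detector(train_std, k=20, q=0.99):
    # Fit LOF on normal standardized samples; larger score = more outlying.
    lof = LocalOutlierFactor(n_neighbors=k, novelty=True).fit(train_std)
    scores = -lof.score_samples(train_std)
    limit = np.quantile(scores, q)          # detection control limit
    def is_fault(x):
        return -lof.score_samples(np.atleast_2d(x))[0] > limit
    return is_fault
```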
Multi-modal process fault detection method based on improved partial least squares
LI Yuan, WU Haoyu, ZHANG Cheng, FENG Liwei
Journal of Computer Applications    2018, 38 (12): 3601-3606.   DOI: 10.11772/j.issn.1001-9081.2018051183
Partial Least Squares (PLS), a traditional data-driven method, performs poorly in fault detection on multimodal data. To solve this problem, a new fault detection method, PLS based on Local Neighborhood Standardization (LNS-PLS), was proposed. Firstly, the original data was Gaussianized by the LNS method; on this basis, the PLS monitoring model was established and the control limits of T² and Squared Prediction Error (SPE) were determined. The test data was likewise standardized by LNS, and its PLS monitoring indices were then calculated for process monitoring and fault detection, thus solving PLS's inability to handle multimodal data. The proposed method was applied to numerical examples and the penicillin production process, and its results were compared with those of Principal Component Analysis (PCA), K Nearest Neighbors (KNN) and PLS. The experimental results show that the proposed method is superior to PLS, KNN and PCA in fault detection, and achieves high accuracy in classification and multimodal process fault detection.
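A sketch of the monitoring model with sklearn, assuming `X` and `Y` are the LNS-standardized process and quality blocks; the statistic forms are the standard T² and SPE definitions, and `n_comp` is an assumed choice:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_monitor(X, Y, n_comp=3):
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
    Xc = (X - mu) / sd
    pls = PLSRegression(n_components=n_comp, scale=False).fit(Xc, Y)
    lam_inv = np.diag(1.0 / pls.x_scores_.var(axis=0, ddof=1))
    P = pls.x_loadings_
    def stats(x):
        xc = (np.atleast_2d(x) - mu) / sd
        t = pls.transform(xc)                        # latent scores
        t2 = (t @ lam_inv @ t.T).item()              # Hotelling's T^2
        spe = float(np.sum((xc - t @ P.T) ** 2))     # squared prediction error
        return t2, spe
    return stats
```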
Lip motion recognition of speaker based on SIFT
MA Xinjun, WU Chenchen, ZHONG Qianyuan, LI Yuanyuan
Journal of Computer Applications    2017, 37 (9): 2694-2699.   DOI: 10.11772/j.issn.1001-9081.2017.09.2694
Aiming at the problems that lip features have too high dimensionality and are sensitive to scale, a speaker authentication technique based on the Scale-Invariant Feature Transform (SIFT) algorithm was proposed. Firstly, a simple video frame normalization algorithm was proposed to adjust lip videos to the same length and extract representative lip motion images. Then, a new algorithm based on SIFT key points was proposed to extract texture and motion features, which were fused by Principal Component Analysis (PCA) to obtain typical lip motion features for authentication. Finally, a simple classification algorithm was presented for the obtained features. The experimental results show that, compared with the common Local Binary Pattern (LBP) feature and the Histogram of Oriented Gradient (HOG) feature, the proposed feature extraction algorithm achieves a better False Acceptance Rate (FAR) and False Rejection Rate (FRR), which proves that the whole speaker lip motion recognition algorithm is effective and achieves the desired results.
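A sketch of the SIFT-plus-PCA feature pipeline with OpenCV, assuming grayscale lip frames; averaging the descriptors per frame is an illustrative pooling choice, not necessarily the paper's:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def lip_motion_features(frames, n_comp=50):
    sift = cv2.SIFT_create()
    pooled = []
    for f in frames:                              # f: grayscale uint8 frame
        _, desc = sift.detectAndCompute(f, None)  # 128-D SIFT descriptors
        if desc is not None:
            pooled.append(desc.mean(axis=0))      # one vector per frame
    X = np.vstack(pooled)
    return PCA(n_components=min(n_comp, *X.shape)).fit_transform(X)
```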
Approach for hesitant fuzzy two-sided matching decision making under unknown attribute weights
LIN Yang, LI Yuansheng, WANG Yingming
Journal of Computer Applications    2016, 36 (8): 2268-2273.   DOI: 10.11772/j.issn.1001-9081.2016.08.2268
To deal with the Two-Sided Matching (TSM) problem with Hesitant Fuzzy Value (HFV) evaluations and unknown attribute weights, a multi-attribute matching decision-making approach was proposed. To begin with, the weight information was determined by maximizing the sum of deviations of the HFV multi-attribute evaluations given by the agents on both sides. Then, the matching degrees were aggregated via an adjusted hesitant fuzzy weighted averaging operation using the obtained weights and the multi-attribute information. In addition, a multi-objective optimization model was established based on the matching degrees of the two sides, and converted into a single-objective model by the min-max method to generate the matching scheme. Finally, a numerical illustration and comparison were conducted: the objective values obtained by the proposed method were 1.689 and 1.575 respectively, and a unique matching scheme was obtained. The experimental results show that the proposed method avoids the multiple solutions caused by subjectively weighting the goal functions.
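The deviation-maximization step admits a closed form: each attribute weight is proportional to the total pairwise deviation of the scores under that attribute. A sketch, assuming the hesitant fuzzy evaluations have already been reduced to score values `E[i, j]`:

```python
import numpy as np

def deviation_weights(E):
    # E[i, j]: score of candidate i on attribute j. Attributes on which
    # the candidates differ more receive larger weights.
    dev = np.abs(E[:, None, :] - E[None, :, :]).sum(axis=(0, 1))
    return dev / dev.sum()
```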
User feature recognition of age and sex for mobile social networks
LI Yuanhao, LU Ping, WU Yifan, WEI Wei, SONG Guojie
Journal of Computer Applications    2016, 36 (2): 364-371.   DOI: 10.11772/j.issn.1001-9081.2016.02.0364
Mobile social network data has a complex network structure, mutual label influence between nodes, and a variety of information including interactions and locations, which together make identifying user characteristics challenging. In response, a real mobile network was studied: the differences between tagged users with different characteristics were extracted using statistical analysis, and the users' age and sex were then recognized using a relational Markov network prediction model. The analysis shows that users of different ages and sexes differ significantly in call probability at different times, call entropy, the distribution and discreteness of location information, the degree of gathering in social networks, and the frequency of binary and ternary interactions. With these features, an approach for inferring a user's age and gender was put forward, which used binary and ternary interaction relation group templates, combined them with the user's own temporal and spatial characteristics, and calculated the total joint probability distribution with a relational Markov network. The experimental results show that the prediction accuracy of the proposed model is at least 8% higher than that of traditional classification methods such as the C4.5 decision tree, random forest, Logistic regression and Naive Bayes.
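One of the cited features, call entropy, is straightforward to compute; a sketch assuming per-time-slot call counts:

```python
import numpy as np

def call_entropy(slot_counts):
    # Shannon entropy of a user's call distribution over time slots:
    # low entropy = concentrated calling habits, high entropy = diffuse.
    p = np.asarray(slot_counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())
```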
Data recovery algorithm in chemical process based on locally weighted reconstruction
GUO Jinyu, YUAN Tangming, LI Yuan
Journal of Computer Applications    2016, 36 (1): 282-286.   DOI: 10.11772/j.issn.1001-9081.2016.01.0282
Concerning the phenomenon of missing data in chemical processes, a Locally Weighted Recovery Algorithm (LWRA) that preserves the local data structure was proposed. The missing data points were located and marked with the symbol NaN (Not a Number), and the data set was divided into a complete set and an incomplete set. For each incomplete sample, in order of completeness, its k nearest neighbors were found in the complete set, and their weights were calculated according to the principle of minimum error sum of squares; the missing entries were then reconstructed from the k nearest neighbors and their weights. The algorithm was applied to two types of chemical process data with different missing rates and compared with two traditional data recovery algorithms, Expectation Maximization Principal Component Analysis (EM-PCA) and the Mean Algorithm (MA). The results reveal that the proposed method has the lowest error and runs on average two times faster than EM-PCA. The experimental results demonstrate that the proposed algorithm not only recovers data efficiently but also improves data utilization, and is suitable for nonlinear chemical process data recovery.
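A minimal sketch of the reconstruction step, assuming a least-squares weight fit on the observed entries (in the spirit of the minimum error-sum-of-squares principle above); names are illustrative:

```python
import numpy as np

def reconstruct_missing(complete, x, miss_mask, k=5):
    # Find the k nearest complete samples using only the observed entries.
    obs = ~miss_mask
    d = np.linalg.norm(complete[:, obs] - x[obs], axis=1)
    N = complete[np.argsort(d)[:k]]
    # Weights minimizing the squared error on the observed entries...
    w, *_ = np.linalg.lstsq(N[:, obs].T, x[obs], rcond=None)
    # ...then reconstruct the missing entries with the same weights.
    x_rec = x.copy()
    x_rec[miss_mask] = N[:, miss_mask].T @ w
    return x_rec
```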
Nonlinear feature extraction based on discriminant diffusion map analysis
ZHANG Cheng, LIU Yadong, LI Yuan
Journal of Computer Applications    2015, 35 (2): 470-475.   DOI: 10.11772/j.issn.1001-9081.2015.02.0470

Aiming at the facts that high-dimensional data is hard to understand intuitively and cannot be effectively processed by traditional machine learning and data mining techniques, a new nonlinear dimensionality reduction method called Discriminant Diffusion Maps Analysis (DDMA) was proposed, implemented by applying a discriminant kernel scheme to the diffusion maps framework. The Gaussian kernel window width was selected from the within-class width and the between-class width according to the sample category labels, enabling the kernel function to effectively extract data correlation features and accurately describe the structural characteristics of the data space. DDMA was applied to an artificial Swiss-roll test and the penicillin fermentation process, and compared with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Kernel Principal Component Analysis (KPCA), Laplacian Eigenmaps (LE) and Diffusion Maps (DM). The results show that DDMA represents high-dimensional data in a low-dimensional space while retaining the original characteristics of the data; in addition, the data structure features in the low-dimensional space generated by DDMA are superior to those of the comparison methods, and its dimension reduction and feature extraction performance verifies the effectiveness of the proposed scheme.
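For reference, a plain diffusion-map embedding is sketched below; DDMA additionally selects the Gaussian width from the within-/between-class widths using the labels, which this sketch omits:

```python
import numpy as np

def diffusion_map(X, sigma=1.0, n_comp=2, t=1):
    d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)  # pairwise squared distances
    K = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)               # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[1:n_comp + 1]       # drop the trivial eigenvalue 1
    return (vals.real[order] ** t) * vecs.real[:, order]
```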

Constructing high-level architecture of online social network through community detection
QIU Dehong, XU Fangxiang, LI Yuan
Journal of Computer Applications    2015, 35 (10): 2737-2741.   DOI: 10.11772/j.issn.1001-9081.2015.10.2737
Online social networks pose severe challenges because of their large size and complex structure, so constructing a concise high-level architecture of an online social network is meaningful. The architecture is composed of the communities, the hub nodes and the relationships between them. The original online social network was represented by a new quantitative attribute graph, and a new construction method was proposed: the communities were detected by using the attributes of nodes and edges jointly, the hub nodes were then identified based on the found communities, and the relationships between the communities and hub nodes were finally recovered. The new method was used to construct the concise high-level architecture of a large online social network extracted from a practical business Bulletin Board System (BBS). The experimental results show that the proposed method performs well when the relationship strength and the community size are set to 0.5 and 3 respectively.
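A generic sketch of the three construction steps with networkx; it uses modularity communities and a degree threshold, whereas the paper detects communities from node and edge attributes jointly, so this is only a structural illustration:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def high_level_architecture(G, hub_degree=10):
    communities = list(greedy_modularity_communities(G))   # step 1: communities
    hubs = {n for n, d in G.degree() if d >= hub_degree}   # step 2: hub nodes
    links = {(i, h) for i, c in enumerate(communities)     # step 3: relationships
             for h in hubs if any(G.has_edge(h, v) for v in c)}
    return communities, hubs, links
```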
Effective inter-cell interference coordination scheme in long term evolution femtocell network based on soft frequency reuse
LI Yanan, SU Hansong, LIU Gaohua, LI Yuan
Journal of Computer Applications    2014, 34 (5): 1239-1242.   DOI: 10.11772/j.issn.1001-9081.2014.05.1239

A femtocell is a small, low-powered base station that can increase system capacity and improve indoor coverage in a two-tier Long Term Evolution (LTE) network. However, the interference between femtocells and the Macrocell eNodeB (MeNB) must be resolved first. Concerning this interference, an effective Inter-Cell Interference Coordination (ICIC) scheme using Soft Frequency Reuse (SFR) was proposed for the LTE femtocell system. With the macrocell pre-allocating frequency bands by SFR, femtocell user equipments chose sub-bands not used in the local macrocell sub-area to avoid co-channel interference; in addition, a femtocell located in the center of a macrocell avoided the sub-bands occupied by the boundary region of the same sector. Simulation results show that the proposed scheme improves the throughput of the overall network by 14% compared with the situation without ICIC, and the average throughput of cell-edge users increases by at least 34%.
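The sub-band choice reduces to simple set logic; a sketch with assumed band identifiers:

```python
def pick_femto_subbands(all_bands, macro_subarea_bands, sector_edge_bands, in_center):
    # Avoid the bands used by the local macrocell sub-area; a center femtocell
    # additionally avoids the bands occupied by the same sector's edge region.
    forbidden = set(macro_subarea_bands)
    if in_center:
        forbidden |= set(sector_edge_bands)
    return [b for b in all_bands if b not in forbidden]
```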

Personalized recommendation algorithm integrating roulette walk and combined time effect
ZHAO Ting, XIAO Ruliang, SUN Cong, CHEN Hongtao, LI Yuanxin, LI Hongen
Journal of Computer Applications    2014, 34 (4): 1114-1117.   DOI: 10.11772/j.issn.1001-9081.2014.04.1114

Traditional graph-based recommendation algorithms neglect the combined time factor, which degrades recommendation quality. To solve this problem, a personalized recommendation algorithm integrating roulette walk and the combined time effect was proposed. Based on the user-item bipartite graph, the algorithm introduced an attenuation function to quantify the combined time factor as association probabilities between nodes; a roulette selection model was then used to choose the next target node according to these association probabilities; finally, a top-N recommendation was produced for each user. The experimental results show that the improved algorithm outperforms the conventional PersonalRank random-walk algorithm in precision, recall and coverage.
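A sketch of one roulette step on the bipartite graph; the exponential attenuation function and its rate `lam` are assumptions standing in for the paper's combined time factor:

```python
import math
import random

def roulette_next(neighbors, last_time, now, lam=0.1):
    # Edge weights decay with interaction age; roulette selection then picks
    # the next node with probability proportional to its weight.
    weights = [math.exp(-lam * (now - last_time[v])) for v in neighbors]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for v, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return v
    return neighbors[-1]
```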

Identification method of system reliability structure
LI Qingmin, LI Hua, XU Li, YUAN Wei
Journal of Computer Applications    2014, 34 (11): 3340-3343.   DOI: 10.11772/j.issn.1001-9081.2014.11.3340

In integrated support engineering, reliability block diagrams contain many components, a high level of mastery of the system's principles is required, and operational data is often incomplete. To resolve these problems, a method was proposed that identifies the reliability structure of a system from operational data and unit reliability information. The system reliability was estimated from system performance information; all candidate reliability structure models were then traversed and their theoretical reliabilities calculated from the units' reliability information; the deviations between the estimated system reliability and the theoretical values were computed, and after sorting, the N reliability structure models with the lowest deviations were output as the identification result. The calculation results of a given example show that a combined system based on the voting reliability structure can be identified with a probability of around 80%, narrowing the candidates to 3% of all possible forms, which significantly reduces the workload of identifying the system reliability structure.
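The traversal-and-ranking idea can be sketched directly; the three candidate structures below (series, parallel, 2-out-of-3 voting) are illustrative examples, not the paper's full model set:

```python
import numpy as np

def series(r):   return float(np.prod(r))
def parallel(r): return float(1.0 - np.prod(1.0 - np.asarray(r)))
def vote_2oo3(r):
    r1, r2, r3 = r
    return r1*r2 + r1*r3 + r2*r3 - 2*r1*r2*r3   # at least 2 of 3 units work

def identify_structure(unit_rel, sys_rel_est, models, top_n=3):
    # Rank candidate structures by |estimated - theoretical| reliability.
    ranked = sorted(models.items(),
                    key=lambda kv: abs(kv[1](unit_rel) - sys_rel_est))
    return ranked[:top_n]

models = {"series": series, "parallel": parallel, "2oo3": vote_2oo3}
```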

Application of kernel parameter discriminant method in kernel principal component analysis
ZHANG Cheng, LI Na, LI Yuan, PANG Yujun
Journal of Computer Applications    2014, 34 (10): 2895-2898.   DOI: 10.11772/j.issn.1001-9081.2014.10.2895

Aiming at the selection of the Gaussian kernel parameter (β) in Kernel Principal Component Analysis (KPCA), a kernel parameter discriminant method was proposed. It calculated the within-class and between-class kernel window widths of the training samples and determined the kernel parameter from them by a discriminant rule, so that the resulting kernel matrix could accurately describe the structural characteristics of the training space. Principal Component Analysis (PCA) was then applied to decompose the feature space, and the principal components were obtained to realize dimensionality reduction and feature extraction. The discriminant method chooses a smaller window width in dense regions of a class and a larger one in sparse regions. Simulations on a numerical process and the Tennessee Eastman Process (TEP) using the Discriminated Kernel Principal Component Analysis (Dis-KPCA) method, compared with KPCA and PCA, show that Dis-KPCA effectively reduces the dimension of the sample data and separates three classes of data completely, achieving higher dimension reduction precision.
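A sketch of the width discriminant, assuming the mean within-class and between-class distances are compared and a compromise width returned; the paper's exact selection rule may differ:

```python
import numpy as np

def discriminant_width(X, y):
    d = np.linalg.norm(X[:, None] - X[None], axis=-1)  # pairwise distances
    same = y[:, None] == y[None]
    w_within = d[same & (d > 0)].mean()    # mean within-class distance
    w_between = d[~same].mean()            # mean between-class distance
    return 0.5 * (w_within + w_between)    # assumed compromise selection
```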

Performance of PCM/FM telemetry system based on multi-symbol detection and Turbo product code
WANG Li, YUAN Fu, XIANG Liangjun, ZHENG Linhua
Journal of Computer Applications    2013, 33 (12): 3482-3485.  
Multi-Symbol Detection (MSD) and Turbo Product Code (TPC) can greatly improve the performance of a PCM/FM (Pulse Code Modulation/Frequency Modulation) telemetry system. To address the high computational complexity of the MSD algorithm, an improved algorithm that reduces its complexity was proposed; the Chase decoding algorithm for TPC also reduced the system memory requirements by simplifying the calculation of the soft input information. The simulation results show that, despite a 1.7 dB loss, the improved algorithm still obtains about 8 dB of performance gain; its low complexity and low memory requirements make it more suitable for hardware implementation.
Delay-constrained dynamic non-rearranged multicast routing optimization
LIU Wei-qun, LI Yuan-chen
Journal of Computer Applications    2012, 32 (05): 1244-1246.  
After studying delay-constrained multicast routing algorithms, a new dynamic non-rearranged multicast routing algorithm, the Non-rearranged Dynamic Multicast Algorithm with Delay Constraint (NDMADC), was proposed. Combining the Dynamic Greedy Algorithm (DGA) with Floyd shortest-path optimization, NDMADC ensures that, while satisfying the delay constraint, a node can dynamically select the minimum-cost path to the multicast tree when joining the multicast session. Moreover, thanks to the greedy algorithm, NDMADC does not need to restructure the multicast tree when a node joins. The simulation results show that the algorithm correctly constructs multicast trees meeting the delay constraint, with low cost and complexity.
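The Floyd component pre-computes the all-pairs minimum-cost paths a joining node can choose from; a standard sketch, where `W` is the cost matrix with `math.inf` for missing edges:

```python
import math

def floyd(W):
    n = len(W)
    D = [row[:] for row in W]              # copy of the cost matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D                               # D[i][j]: minimum path cost i -> j
```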
Research on atomicity in workflow transactions
Ya-li YUAN, Hong-mei CHEN
Journal of Computer Applications    2011, 31 (07): 1765-1768.   DOI: 10.3724/SP.J.1087.2011.01765
Traditional workflow management systems lack transaction-handling abilities and thus cannot recover quickly enough. This paper introduced a prototype of transactional workflow that adds a failure mode to the modeling. When a task failed, the system could invoke a transactional algorithm with relaxed atomicity to ensure data consistency while reducing the work of participants. The experimental result shows that the workflow management system with transaction characteristics can recover efficiently after failure.
Efficient real-time processing for read-only transactions in mobile broadcast environments
Xiang-Dong LEI, Yue-Long ZHAO, Song-Qiao CHEN, Xiao-Li YUAN
Journal of Computer Applications   
A new method for processing mobile real-time read-only transactions in mobile broadcast environments was proposed. Various multiversion broadcast disk organizations were introduced. Mobile read-only transactions could be committed without blocking through the multiversion mechanism, and conflicts between mobile read-only and mobile update transactions could be eliminated by an optimistic method. To avoid unnecessary transaction restarts, multiversion dynamic adjustment of the serialization order was adopted. If a read-only transaction passed all the backward validation on the Mobile Host (MH), it could be committed without contacting the server, greatly reducing the response time of mobile read-only transactions. The results of simulation experiments show that the new method performs better than other protocols.
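The local-commit rule can be sketched as a simple check, assuming the read set records the version of each item read; this only illustrates backward validation, not the protocol's full logic:

```python
def backward_validate(read_set, current_version):
    # A read-only transaction commits at the mobile host without contacting
    # the server iff every item it read is still the current version.
    return all(current_version[item] == ver for item, ver in read_set.items())
```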
Application of pseudo-Zernike moments in image reconstruction
HU Hui-jun, LI Yuan-xiang, LIU Mao-fu
Journal of Computer Applications    2005, 25 (03): 592-593.   DOI: 10.3724/SP.J.1087.2005.0592

Pseudo-Zernike moments are a kind of region-based shape descriptor. The concept of pseudo-Zernike moments was introduced, and their good characteristics, such as invariance, robustness and effectiveness, were discussed. Images can be reconstructed from invariant pseudo-Zernike moments, and the experimental results demonstrate the feasibility of image reconstruction based on the improved pseudo-Zernike moments.
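For concreteness, the standard pseudo-Zernike radial polynomial underlying these moments is sketched below (textbook definition, not specific to this paper):

```python
import math

def pzernike_radial(n, m, rho):
    # R_nm(rho) = sum_{s=0}^{n-|m|} (-1)^s (2n+1-s)! /
    #             [s! (n+|m|+1-s)! (n-|m|-s)!] * rho^(n-s)
    m = abs(m)
    return sum((-1) ** s * math.factorial(2 * n + 1 - s) * rho ** (n - s)
               / (math.factorial(s) * math.factorial(n + m + 1 - s)
                  * math.factorial(n - m - s))
               for s in range(n - m + 1))
```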

Optimization of tensor virtual machine operator fusion based on graph rewriting and fusion exploration
WANG Na, JIANG Lin, LI Yuancheng, ZHU Yun
Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023091252
Available online: 15 March 2024